The SEO industry, like many others, has private forums, chat threads and groups of connected individuals whose interactions happen largely behind closed doors. Today, I’d like to pull back a curtain and share a debate that occurred between a number of CEOs in the search marketing industry over the last few days that I think you’ll find both fascinating, and hopefully, valuable, too.
The topic is whether content quality is highly correlated with, or predictive of, high rankings in the search engines. This isn't a cut-and-dried debate, but a more nuanced and, yes, subjective look at content quality from a wide range of perspectives.
First, I’ll introduce our players (these are just the folks who agreed to have their contributions published), after which we can dive into the discussion:
Stephan Spencer is VP of SEO Strategies at Covario, co-author of The Art of SEO, founder of Netconcepts (recently acquired by Covario), and inventor of the GravityStream SEO proxy technology (now rebranded as Covario's Organic Search Optimizer).
Gord Hotchkiss is the President and CEO of Enquiro, author of the BuyerSphere Project, and a leading expert and researcher on online and search user behavior.
Thad Kahlow is the CEO of BusinessOnLine, one of the nation's leading online marketing agencies, which has successfully launched hundreds of solutions for clients including American Red Cross, Caterpillar, Sony, NEC, Sybase, and Hasbro, to name a few.
Eric Enge is the President of Stone Temple Consulting, a 16-person SEO and PPC consulting firm with offices in Boston and Northern California. Eric is co-author of The Art of SEO from O'Reilly.
Chris Baggott co-founded ExactTarget and authored the popular book Email Marketing By The Numbers. He is currently co-founder/CEO of Compendium, an enterprise social content publishing software company, and writes about best practices for blogging on his own blog.
Richard Zwicky is the Founder and President of Eightfold Logic (formerly known as Enquisite), a predictive insights and search and social analytics platform used by enterprises and agencies around the world. A serial entrepreneur, Richard is the author of multiple patents and has been involved in online marketing since the late 1990s.
Lawrence Coburn is the CEO and co-founder of DoubleDutch – the first white label geolocation platform. He is also an Editor at The Next Web's geolocation blog, and a mentor at io ventures – a San Francisco based startup incubator.
Will Critchlow is a co-founder of Distilled, a London & Seattle based SEO consultancy. He speaks regularly at industry conferences on analytics, data-driven optimization and data visualization.
Rand Fishkin is… the author of this post 🙂
Thad Kahlow (in reference to these three posts):
Great Content ≠ Great Rankings
We disagree. Great content is hugely important to ranking well. It may not be the only factor (that content still has to be found, be newsworthy, and incorporate keywords that will drive traffic), but the basic principle stands: content is a huge factor in generating competitive rankings.
Rand Fishkin:
Great content ≠ Great Rankings
I'll fight tooth and nail on this one. Great content is a really good thing to do for many reasons, but I'd doubt it correlates to great rankings any better than PageRank does (or doesn't).
Eric Enge:
Speaking of fighting tooth and nail, you have now earned my $0.02 (:>). Great content may not equate to great rankings by itself, but if we look at the integrated whole of a web marketing strategy, where link building and social media promotion are the driving components of success, great content is a MUST. I think it would be a disservice to put anything out there that suggests otherwise. Accordingly, I would suggest:
“In the absence of a marketing strategy to leverage it, great content will not necessarily drive great rankings, but if you are looking to create a major web property (for your market space) then great content is a requirement. Its impact on the promotion of your web site is fundamental. Obtaining links and getting positive feedback from social media communities is far easier with great content.”
Stephan Spencer:
Great content doesn't automatically mean great rankings. In other words, it is not a foregone conclusion that great content will necessarily rank just because of its quality. The content may deserve to be ranked, but if no one knows about it, or if the site architecture is so atrocious that it repels the spiders, then it won't rank. It's as important to actively promote that great content as to have created it. I'm simply making an argument against that tired old phrase "Build it and they will come." Don't let my comments dissuade you from creating high quality content though! Indeed, it's a likely prerequisite for SEO success, especially when the keywords being targeted are highly competitive.
Chris Baggott:
So if Eric is $0.02… I'm $0.001 🙂
I’d like to chime in on the content question. Isn’t this an issue of competition?
I wrote a post talking about one of Rand's slides at Web 2.0 showing 4-word phrases being the highest converting. We took a look at our own client base and found exactly the same correlation. The client I use in the example has decent domain authority but not much else other than content and categories specifically relevant to the longer-tail terms they are targeting. When we talk about SEO, don't we need to differentiate fat-head tactics from long-tail tactics?
Rand, you made a great case for the long tail and conversion in your deck. Vanessa Fox, in her new book, makes the statement that 56% of all searches return no ads. As the world starts to appreciate that the tail is only going to get longer, it seems like content is going to become more and more important. Am I crazy to assume that the lower the query competition, the bigger the role content relevance, recency, and frequency play in driving high converting traffic?
Rand Fishkin:
Agreed – for the long tail, domain authority + enough juice to get lots of pages indexed + the mere mention of the phrase combo = you’re often ranking top 5
Eric – agree with you as well; it certainly makes many things easier, but so many people in our industry (and outside of it) think and promote the idea that "great content" (which, IMO, has been repeated so often it's nearly lost meaning) will get you rankings. Great marketing will get you rankings, often regardless of, or in spite of, content quality.
Lawrence Coburn:
I agree wholeheartedly. To do well in the tail, you need deep (though not necessarily high quality) content. To match 4-5-6 word queries at scale, you need to have a lot of content to draw from.
On a related note, the May Day update changed something around exactly these sorts of queries, and for us, not for the better. I'm curious as to where the traffic that was going to us, and other big, broad content sites, is now going.
Rand Fishkin:
Yeah – we also took about a 10% hit in the tail of search traffic from Google to seomoz.org and that was weird. Previous updates have always only helped us do better or stayed the same. Digging into traffic data, it appears to be fewer pages receiving any traffic, which tells me it’s most likely an indexation issue – Google getting pickier about what it keeps in the index.
Thad Kahlow:
Re: great content = great rankings.
I agree, and don't think many could disagree, that creating great content will not, in and of itself, deliver great rankings. But when we look at this issue from a much broader context (30k ft), Google's mission is to provide the most relevant experience (not just SERPs). Better content provides a better experience.
So I digress, because I believe this topic addresses a systemic ailment within search (I may piss off a few old schoolers with this one)… we as SEOs spend significantly too much time obsessing (me included) over algorithmic loopholes, updates, dances, undulations… to the point of reaching diminishing returns. And I humbly (as much as I can be) suggest that if we as SEOs spent more time with our clients focusing on end users' needs when launching a search campaign, and built unique, relevant content, with less focus on the extreme nuances of the algo (yes, you need an extremely sound SEO best-practices foundation, plus some)… Google, the client, and most importantly the end users are better served = the search industry wins. Otherwise, we are all fighting the battle of "out-optimizing" each other and not the ultimate mission: winning the "relevant experience" war.
In sum: a significant focus on creating creative, relevant content should be a major part of every search solution, yet far too few make it one.
Gord Hotchkiss:
Couldn't agree more with Thad (surprise, surprise)…
And I would go even further. Search is rapidly moving beyond relevance toward usefulness as the metric of success. Relevance is, and always has been, simply a measurable proxy for usefulness. Expect Google's algos to start finding signals of usefulness, across multiple content buckets, and using them to determine what gets shown when, and to whom.
So, more and more, SEO and prospect intent have to align, and chasing algos becomes moot. I think we have to worry much less about systematic testing against a black box algo and worry more about understanding what our prospects want to do. That's where the search engines have to head.
Eric Enge:
Thad – great restatement of what I was saying.
We all need to remember where Google (and Bing) are going. They want high quality content. Over time, they WILL get it. Winning the "relevant experience" war will help you build great traffic now, and secure your business from the inherent risks of changes in Google's algorithm (because those changes will likely be a positive for you).
Rand Fishkin:
I’m going to, oddly enough, say that I disagree with a few of these statements.
Much as I would love to believe the engines will eventually converge on signals that push higher quality content above more popular content, I don't think that will ever be the case.
Every other field is the same – it’s not the fantastic, artistic, often foreign-language, personally compelling films that win Oscars or sell big at the box office. It’s not the authentic, possibly awkward, but highly dedicated, humble and talented politicians who win elections. It’s not the news with the most substance, science and accuracy that earns the front page headlines. In every facet of human life – it’s what’s popular and what’s marketed.
I believe that as SEOs, we owe it to our clients to let them know that accessibility and quality are certainly bases they need to hit, but they won’t necessarily win the battles or the war, even in the long term.
As Google/Bing/etc turn to new signals, they’re looking at things like personalization, social search, Twitter data, usage data, etc. – these aren’t things that “can’t be gamed” or that predict “quality content” – they’re just like data points society uses to value films, politicians and news stories. That’s why my belief is that SEO isn’t about “great” content or “the most useful” content. It’s about the “most marketable” content targeted to demographics that are likely to fulfill the search engines’ signals. Today, that’s those on the web who create links. Tomorrow it could be those who tweet and share on Facebook. In years to come, it might be a wider swath of web users, but they will still be influence-able the way humans always are – through psychologies that persuade them to take action in the kinds of ways the engines measure.
I’ll ask a final question – does anyone here believe that the highest converting landing page is the one that does the best job explaining the product or the one that taps into the science of persuasion (social proof, ego, scarcity, etc.)?
At the 30K foot level, I think Google is about representing popularity and relevance on the web the same way it’s done in real life. They’re not trying to re-invent the way humans consider/judge/evaluate content.
The above is, of course, opinion.
Gord Hotchkiss:
I think you're right, Rand… increasingly, Google will try to pick up sociological and "human" based signals, rather than arbitrary semantic calculations. If you think about PageRank, it's really a network-based signal built on what they had to work with at the time: hyperlinking structures. Today, we have social networks, and I'm sure there are a few people at Google smart enough to determine emergent behaviors out of the complexity of that network structure – SocialRank.
The second piece of this is personalization: identifying contextually relevant, task-based intent, and matching the network-wide signals to that. Again, it's difficult to optimize against this… no universally true baseline to test against!
So, with the absence of a consistent and testable environment, we have no option but to switch our focus to people instead. If that's where Google is going (and I know Microsoft is heading in that direction), we have to be going there too…
Richard Zwicky:
I’d disagree that there are no universal baselines, nor is it the best quality content, nor the most content that drives this.
Actually, I think that in its own way, Google always has tried to pick up on sociological and human-based signals. The reality is that in the past, the dimensions for input were quite flat, and that allowed us to consider things two-dimensionally: very simply, the site, and other sites that linked in, with just a little outside input.
The data points being examined were finite, and relatively easy to manipulate. As the networks have grown, and the ability and manners in which people have interacted has changed, so have a lot of the notions. Social networks are a dimension which doesn't necessarily connect directly to any one site at any time, but the activity therein sends very definite market signals about complex behaviour patterns globally, which can be used to alter the algorithmic concepts of relevance.
I’d disagree that there are no baselines to test against, or optimize against. Β It’s just the field of perspective to provide the analysis is different. Trends, baselines and norms are hard to determine on the individual, or even among small groups, but norms can be established over time, contextual variances defined, and then norms applied to other new or unique segments.
I would argue that the change in signal measurement is analogous to the change in communities that's occurred over the last 200 years in North America. As you move through these periods, signals, outreach, and measurement all changed, as did the tools of marketing. Here's a very short synopsis, to give you an idea of my perspective…
200 years ago, most people were born, raised, and died within 25 miles of the same place. Very few people ventured out, went away to school, etc. This was your community. You were raised with, worked with, and socialized with the same group of people. Their interests were your interests. Any wonder there was a caste/class system?
~150 years ago, rail networks were established, and movement increased. People traveled, but not too distantly, and usually only to hubs. Your community expanded a little, but not much. But you were exposed to more and more.
~100 years ago, the automobile age started. People now traveled through a larger area; their regular range of movement grew to a ~100 mile radius. Now, you often were working with people you'd not encountered while growing up, your children were traveling further and further away to school, and your community differed based on interests.
~1945 – The modern automobile age began. Now working 2 hours away from home was "normal" (funny how the Internet's changing that part back!). Your home community was distinct from work. Husband and wife each had different communities and interactions during the day. Signals became much noisier. Marketing had to become more sophisticated. Messaging bounced around more.
~194X – Telephones in every home became common (not that long ago!) – the first "buzz marketing"? Still individual to individual…
~1960 – Televisions in every home became common… mass visual communication, and marketing.
~197X – The IT age starts, you know how this goes….
Communities? Nothing like they were even when I was growing up.
Today, like most of yours, mine is global, not local. It's based on a huge range of interests, and people I've encountered globally through my life. I don't have a single community I participate in regularly; I have many. I fade in and out from time to time as interest grows and fades. The buzz in one community is generally on different topics from one to another, and yet there are consistent common threads through all of them, no matter how disconnected.
Marketing, measuring, and responding the way a search engine needs to? It needs to monitor all the signals, across all communities, and understand how contextual relevance shifts. In essence, using the above analogy, the signals the engines used to monitor would be akin to where we were in community evolution somewhere between 100 years ago and 1945. The dimensions to be measured and factored in are so far beyond how most traditional marketers think as to be unfathomable (which is this group's opportunity).
Chris Baggott:
This is an opinion I agree with. My only point from earlier has to do with popularity… compared to what? It's a lot easier to be "popular" in a smaller pond. 🙂
Will Critchlow:
I’m a bit late to this party. A couple of late thoughts:
Firstly, I thought today's xkcd was appropriate.
Secondly, I think that a lot rests on how we define ‘great content’. However we define it, I think Rand is correct that it cannot (except in rare cases) be sufficient – at a minimum it needs a strategy of repeated delivery that leads to enough of a following to bring the links it needs. I would like to bundle up a degree of ‘linkability’ into the definition of ‘great content’ though. Rand – I think your definition (where you compare it to great artwork, or honest politicians) is too narrow. I believe that ‘great’ in this context can be defined as the right combination of populist within the right niche, remarkable (in the Seth Godin sense of “likely to be remarked-upon”) as well as the purer content metrics.
Finally, I was thinking about this in the context of the Mayday update, where it seems to me that we saw a change in the relative likelihood of content to succeed depending on where it appears. Best estimates suggest a move from long-tail rankings for content on large, powerful domains to rankings for smaller, more niche domains. Although this kind of relative change is nothing new, it is a timely reminder that content doesn't operate in a vacuum.
I think Will’s actually doneΒ a remarkable job summing things up – it all depends what we mean by “great”Β content and how we think about the evaluation of that word by all the signals the engines measure today and might measure tomorrow.
Hopefully, this debate has been valuable to you – we felt, after looking back through the thread, that there was a lot of great stuff that deserved wider review and more thought. We’d all love to hear what you’ve got to say/share on the subject.